Article Search
Subscription full text: 2,476 articles
Free: 212 articles
Domestic free: 2 articles
Industrial technology: 2,690 articles

Articles by year:
2024: 6 articles
2023: 37 articles
2022: 26 articles
2021: 135 articles
2020: 86 articles
2019: 102 articles
2018: 112 articles
2017: 99 articles
2016: 136 articles
2015: 88 articles
2014: 148 articles
2013: 228 articles
2012: 191 articles
2011: 193 articles
2010: 143 articles
2009: 136 articles
2008: 124 articles
2007: 120 articles
2006: 76 articles
2005: 65 articles
2004: 70 articles
2003: 46 articles
2002: 58 articles
2001: 31 articles
2000: 25 articles
1999: 21 articles
1998: 16 articles
1997: 23 articles
1996: 8 articles
1995: 23 articles
1994: 16 articles
1993: 11 articles
1992: 9 articles
1991: 9 articles
1990: 10 articles
1989: 10 articles
1988: 8 articles
1987: 9 articles
1986: 7 articles
1985: 7 articles
1984: 4 articles
1983: 3 articles
1982: 3 articles
1981: 4 articles
1979: 2 articles
1977: 1 article
1975: 1 article
1972: 1 article
1971: 1 article
1969: 1 article
2,690 results in total; search time: 31 ms.
31.
The aim of this research is to analyse the effectiveness of the Chicago Board Options Exchange Market Volatility Index (VIX) when used with Support Vector Machines (SVMs) to forecast the weekly change in the S&P 500 index. The data cover the period between 3 January 2000 and 30 December 2011. A trading simulation is implemented so that statistical efficiency is complemented by measures of economic performance. The inputs retained are traditional technical trading rules commonly used in the analysis of equity markets, such as the Relative Strength Index (RSI), Moving Average Convergence Divergence (MACD), the VIX, and the daily return of the S&P 500. The SVM identifies the best situations in which to buy or sell in the market. The two outputs of the SVM are the movement of the market and the degree of set membership. The results show that the SVM using the VIX produces better results than the buy-and-hold strategy or the SVM without the VIX. The influence of the VIX in the trading system is particularly significant during bearish periods. Moreover, the SVM allows a reduction in the maximum drawdown and the annualised standard deviation.
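The abstract does not include the authors' implementation; the sketch below is a hypothetical reconstruction of the setup using scikit-learn, assuming daily S&P 500 closes and VIX levels in a pandas DataFrame. Column names, indicator windows, and SVM hyperparameters are illustrative assumptions, not the paper's values.

```python
# Hypothetical sketch (not the authors' code): an SVM fed with RSI, MACD, the
# VIX, and daily returns to predict the weekly direction of the S&P 500.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def rsi(close: pd.Series, window: int = 14) -> pd.Series:
    """Relative Strength Index from daily closes (simple-average variant)."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    return 100 - 100 / (1 + gain / loss)

def macd(close: pd.Series, fast: int = 12, slow: int = 26) -> pd.Series:
    """MACD line: fast EMA minus slow EMA of the closes."""
    return close.ewm(span=fast).mean() - close.ewm(span=slow).mean()

def build_dataset(df: pd.DataFrame):
    """df is assumed to hold daily 'sp500' and 'vix' columns."""
    X = pd.DataFrame({
        "rsi": rsi(df["sp500"]),
        "macd": macd(df["sp500"]),
        "vix": df["vix"],          # drop this column for the "without VIX" variant
        "ret_1d": df["sp500"].pct_change(),
    })
    # Target: direction of the index over the next 5 trading days (~1 week).
    fwd = df["sp500"].shift(-5)
    y = (fwd > df["sp500"]).astype(int)
    mask = X.notna().all(axis=1) & fwd.notna()
    return X[mask], y[mask]

# probability=True yields both outputs the abstract mentions: the predicted
# market movement and a degree-of-membership score for that prediction.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
# Usage: X, y = build_dataset(daily_df); model.fit(X.iloc[:split], y.iloc[:split])
```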
32.
Algorithms for numeric data classification have been applied to text classification. Usually the vector space model is used to represent text collections. The characteristics of this representation, such as sparsity and high dimensionality, sometimes impair the quality of general-purpose classifiers. Networks can be used to represent text collections, avoiding the high sparsity and allowing relationships among the different objects that compose a text collection to be modelled. Such network-based representations can improve the quality of the classification results. One of the simplest ways to represent a textual collection as a network is through a bipartite heterogeneous network, composed of objects representing the documents connected to objects representing the terms. Bipartite heterogeneous networks do not require the computation of similarities or relations among the objects and can be used to model any type of text collection. Given these advantages, in this article we present a text classifier that builds a classification model using the structure of a bipartite heterogeneous network. The algorithm, referred to as IMBHN (Inductive Model Based on Bipartite Heterogeneous Network), induces a classification model by assigning weights to the objects that represent the terms for each class of the text collection. An empirical evaluation using a large number of text collections from different domains shows that the proposed IMBHN algorithm produces significantly better results than the k-NN, C4.5, SVM, and Naive Bayes algorithms.
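As a rough illustration of the induction step, the sketch below learns one weight per (term, class) pair with an error-correction update over the document-term edges. The data structures, update rule, and toy data are simplifying assumptions, not the published IMBHN algorithm verbatim.

```python
# Hedged sketch of IMBHN-style induction on a bipartite document-term network.
from collections import defaultdict

def train(docs, labels, classes, lr=0.1, epochs=20):
    """docs: list of {term: frequency} dicts (the document-term edges).
    Returns weights[term][class], the induced classification model."""
    w = defaultdict(lambda: defaultdict(float))
    for _ in range(epochs):
        for doc, y in zip(docs, labels):
            # Score each class by propagating term weights along the doc's edges.
            scores = {c: sum(f * w[t][c] for t, f in doc.items()) for c in classes}
            for c in classes:
                error = (1.0 if c == y else 0.0) - scores[c]  # one-hot target
                for t, f in doc.items():                      # error-correction update
                    w[t][c] += lr * error * f
    return w

def classify(doc, w, classes):
    return max(classes, key=lambda c: sum(f * w[t][c] for t, f in doc.items()))

# Toy usage (invented data):
docs = [{"goal": 2, "match": 1}, {"vote": 1, "election": 2}]
w = train(docs, ["sport", "politics"], ["sport", "politics"])
print(classify({"match": 1}, w, ["sport", "politics"]))  # -> sport
```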
33.
34.
Discovering frequent factors in long strings is an important problem in many applications, such as biosequence mining. Classical approaches process a vast database of small strings; in this paper, however, we analyse a small database of long strings. The main difference resides in the much larger number of patterns to analyse. To tackle the problem, we have developed a new algorithm, SANSPOS, for discovering frequent factors in long strings. We present an Apriori-like solution that exploits the fact that no super-pattern of a non-frequent pattern can be frequent. SANSPOS takes a multiple-pass, candidate-generation-and-test approach, and patterns of multiple lengths can be generated in a single pass. The algorithm uses a new data structure to arrange nodes in a trie, and a Positioning Matrix is defined as a new positioning strategy. By using Positioning Matrices, we can apply advanced pruning heuristics in a trie at minimal computational cost. Positioning Matrices also let us process strings containing Short Tandem Repeats and calculate different interestingness measures efficiently. Furthermore, the algorithm traverses different sections of the input strings in parallel, speeding up the running time. The algorithm has been successfully applied in natural language and biological sequence contexts.
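SANSPOS itself relies on the trie and Positioning Matrix structures described above; the minimal levelwise sketch below shows only the anti-monotone pruning idea it exploits (a factor can be frequent only if all of its sub-factors are), with support counted per input string.

```python
# Minimal Apriori-style factor mining over strings: a sketch of the pruning
# idea only, without SANSPOS's trie or Positioning Matrices.
from collections import Counter

def frequent_factors(strings, min_support):
    """Return {factor: support} for substrings occurring in at least
    min_support of the input strings, discovered level by level."""
    freq = {}
    candidates = {c for s in strings for c in s}  # length-1 candidates
    while candidates:
        support = Counter()
        for s in strings:
            support.update({f for f in candidates if f in s})
        survivors = {f for f in candidates if support[f] >= min_support}
        freq.update({f: support[f] for f in survivors})
        # Join step: extend only factors whose overlapping sub-factors survived,
        # since any super-pattern of a non-frequent pattern cannot be frequent.
        candidates = {a + b[-1] for a in survivors for b in survivors
                      if a[1:] == b[:-1]}
    return freq

# Example: factors present in all three (toy) sequences.
print(frequent_factors(["GATTACA", "TACATGA", "CATTAGA"], min_support=3))
```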
35.
Product development today is becoming increasingly knowledge intensive. Specifically, design teams face considerable challenges in making effective use of increasing amounts of information. One approach to supporting product information retrieval and reuse is case-based reasoning (CBR), in which problems are solved "by using or adapting solutions to old problems." In CBR, a case includes both a representation of the problem and a solution to that problem. Case-based reasoning uses similarity measures to identify the cases most relevant to the problem to be solved. However, most non-numeric similarity measures are based on syntactic grounds, which often fail to produce good matches when confronted with the meaning associated with the words they compare. To overcome this limitation, ontologies can be used to produce similarity measures based on semantics. This paper presents an ontology-based approach that determines the similarity between two classes using feature-based similarity measures in which attributes take the role of features. The proposed approach is evaluated against other existing similarity measures. Finally, its effectiveness is illustrated with a case study on product–service–system design problems.
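The abstract does not reproduce the measure itself; a classic feature-based family it builds on is Tversky's ratio model, sketched below with a class's attribute set standing in for its features. The classes and attributes are invented for illustration.

```python
# Hedged illustration of feature-based similarity with attributes as features
# (Tversky's ratio model); not the paper's exact ontology-based measure.
def tversky(a: set, b: set, alpha: float = 0.5, beta: float = 0.5) -> float:
    """Similarity of two classes from their attribute sets: shared attributes
    weighed against the attributes unique to each class."""
    common = len(a & b)
    denom = common + alpha * len(a - b) + beta * len(b - a)
    return common / denom if denom else 0.0

# Invented ontology classes with attribute sets:
pump = {"flow_rate", "power", "inlet_diameter", "housing_material"}
compressor = {"flow_rate", "power", "pressure_ratio", "housing_material"}
print(tversky(pump, compressor))  # 3 / (3 + 0.5 + 0.5) = 0.75
```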
36.
Ozonized theobroma fat is used as a raw material in the manufacture of pessaries and cosmetic creams. Ozonization of theobroma fat with water was carried out at different applied ozone dosages, and the resulting peroxide value (PV), acid value, iodine value, total hydroperoxide content, and fatty acid (FA) content were determined. PV and total hydroperoxide content increased notably with applied ozone dosage up to 35.7 mg/g. The acid value varied slightly, from 4.1 to 9.9 mg KOH/g, and the iodine value fell to zero. At higher applied ozone dosages, PV and total hydroperoxide content increased only slightly. A comparison of total hydroperoxide measurement by the ferrous oxidation in xylenol orange assay with the traditional iodometric assay for PV determination showed a significant linear correlation. Small amounts of oleic acid were found in ozonized theobroma fat samples with iodine values equal to zero, which demonstrates that iodine value determination is an inexact assay. During ozonization of theobroma fat, an 18.9-fold increase in acid value with respect to the initial value was observed, owing to the decomposition of peroxides.
37.
Global Software Engineering has become a standard in today's software industry. Research in distributed software development poses severe challenges due to the spatial and temporal distribution of the actors, as well as to language, intercultural, and organizational aspects. These challenges come on top of the "traditional" challenges of large-scale software projects in the domain itself, such as coordination and communication issues, requirements volatility, and lack of domain knowledge. While several authors have reported empirical studies of global software development projects, the methodological difficulties and challenges of this type of study have not been sufficiently discussed. In this paper, we share our experiences of collecting and analysing qualitative data in the context of Global Software Engineering projects. We discuss strategies for gaining access to field sites, building trust, and documenting distributed and complex work practices, drawing on several research projects we have conducted over the past 9 years. The experiences described in this paper illustrate the need to deal with fundamental problems, such as understanding local languages and different cultures, observing synchronous interaction, or dealing with barriers imposed by political conflicts between the sites. Based on our findings, we discuss practical implications and strategies that can be used by other researchers and provide recommendations for future research on methodological aspects of Global Software Engineering.
38.
The synthesis of the title compound 13 has been carried out through the preparation of its precursor, (3R,4R,5S,6R)-3,4,5-trihydroxy-1,7-dioxaspiro[5.5]undecane (6), obtained from D-fructose using Wittig's methodology, reduction, and spiroketalation. Compound 6 was transformed into 13 by a Barton deoxygenation at C-5 followed by a Corey dideoxygenation at C-3,4 of the appropriately protected derivatives. Enantiospecific synthesis of spiroacetals, Part II; for Part I, see Izquierdo and Plaza (1990).
39.
Extraction of rice bran oil using supercritical carbon dioxide and propane (total citations: 1; self-citations: 0; citations by others: 1)
Extraction of rice bran lipids was performed using supercritical carbon dioxide (SC-CO2) and liquid propane. To provide a baseline for extraction efficiency, accelerated solvent extraction with hexane was performed at 100°C and 10.34 MPa. Extraction pressure was varied for both the propane and SC-CO2 extractions, and the role of temperature in SC-CO2 extraction efficiency was investigated at 45, 65, and 85°C. For the SC-CO2 experiments, extraction efficiency was proportional to pressure and inversely proportional to temperature; the maximal yield achieved with SC-CO2 was 0.222±0.013 kg of oil extracted per kg of rice bran, at 45°C and 35 MPa. The maximal yield achieved with propane was 0.224±0.016 kg of oil per kg of rice bran, at 0.76 MPa and ambient temperature. The maximal extraction efficiencies of both SC-CO2 and propane were significantly different from the hexane baseline yield of 0.261±0.005 kg of oil extracted per kg of rice bran. A simulated economic analysis was performed on the possibility of using SC-CO2 and propane extraction technologies to remove oil from rice bran generated in Mississippi. Although the analysis assumed the maximal extraction efficiency for each technology, neither process resulted in a positive rate of return on investment.
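For a sense of scale, the reported yields can be compared directly with the hexane baseline. The short calculation below uses only the numbers quoted in the abstract; the recovery-ratio framing is ours, not part of the paper's economic model.

```python
# Recovery of each solvent relative to the hexane baseline, from the yields
# reported in the abstract (kg oil extracted per kg rice bran).
yields = {
    "hexane (baseline)": 0.261,
    "SC-CO2 (45 C, 35 MPa)": 0.222,
    "propane (0.76 MPa, ambient)": 0.224,
}
baseline = yields["hexane (baseline)"]
for solvent, y in yields.items():
    print(f"{solvent}: {y:.3f} kg/kg ({100 * y / baseline:.1f}% of hexane)")
# SC-CO2 recovers ~85.1% and propane ~85.8% of the hexane yield, consistent
# with neither process achieving a positive return in the economic analysis.
```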
40.